Bibliography
[277] Kaicheng Yu, Christian Sciuto, Martin Jaggi, Claudiu Musat, and Mathieu Salzmann. Evaluating the search phase of neural architecture search. arXiv preprint arXiv:1902.08142, 2019.
[278] Zhou Yu, Jun Yu, Jianping Fan, and Dacheng Tao. Multi-modal factorized bilinear
pooling with co-attention learning for visual question answering. In Proc. of ICCV,
pages 1821–1830, 2017.
[279] Ali Hadi Zadeh, Isak Edo, Omar Mohamed Awad, and Andreas Moshovos. GOBO: Quantizing attention-based NLP models for low latency and energy efficient inference. In 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), pages 811–824. IEEE, 2020.
[280] Ofir Zafrir, Guy Boudoukh, Peter Izsak, and Moshe Wasserblat. Q8BERT: Quantized 8bit BERT. In 2019 Fifth Workshop on Energy Efficient Machine Learning and Cognitive Computing-NeurIPS Edition (EMC2-NIPS), pages 36–39. IEEE, 2019.
[281] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In Proceedings of
the British Machine Vision Conference, pages 1–15, 2016.
[282] Matthew D Zeiler. Adadelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
[283] Baochang Zhang, Alessandro Perina, Zhigang Li, Vittorio Murino, Jianzhuang Liu, and Rongrong Ji. Bounding multiple Gaussians uncertainty with application to object tracking. International Journal of Computer Vision, 118:364–379, 2016.
[284] Dongqing Zhang, Jiaolong Yang, Dongqiangzi Ye, and Gang Hua. LQ-Nets: Learned quantization for highly accurate and compact deep neural networks. In Proceedings of the European Conference on Computer Vision (ECCV), pages 365–382, 2018.
[285] Wei Zhang, Lu Hou, Yichun Yin, Lifeng Shang, Xiao Chen, Xin Jiang, and Qun Liu. TernaryBERT: Distillation-aware ultra-low bit BERT. arXiv preprint arXiv:2009.12812, 2020.
[286] Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. ShuffleNet: An extremely efficient convolutional neural network for mobile devices. arXiv preprint arXiv:1707.01083, 2017.
[287] Junhe Zhao, Sheng Xu, Baochang Zhang, Jiaxin Gu, David Doermann, and Guodong Guo. Towards compact 1-bit CNNs via Bayesian learning. International Journal of Computer Vision, pages 1–25, 2022.
[288] Feng Zheng, Cheng Deng, and Heng Huang. Binarized neural networks for resource-
efficient hashing with minimizing quantization loss. In IJCAI, pages 1032–1040, 2019.
[289] Liang Zheng, Liyue Shen, Lu Tian, Shengjin Wang, Jingdong Wang, and Qi Tian. Scalable person re-identification: A benchmark. In Proceedings of the IEEE International Conference on Computer Vision, pages 1116–1124, 2015.
[290] Shixuan Zheng, Peng Ouyang, Dandan Song, Xiudong Li, Leibo Liu, Shaojun Wei, and
Shouyi Yin. An ultra-low power binarized convolutional neural network-based speech
recognition processor with on-chip self-learning. IEEE Transactions on Circuits and
Systems I: Regular Papers, 66(12):4648–4661, 2019.